Digital-to-Analog Converter design project

About running apps:

  • If you have Mathematica(R), download the .nb APP files.
  • If you don't have Mathematica(R), you will need to download the free Wolfram CDF Player. Then download the .cdf APP files and run them.



 (continued from page 2)

Additional notes

Why do we have to test the DAC with linear regression? Isn't just building the circuit good enough for the project?

First, here is the app you can use to refresh your memory of, or learn about, linear regression. The app was also used to generate the graphics discussed below.

Suppose you build a robot and need a distance sensor. You purchase one, and it gives a voltage output as a function of distance. Your Arduino or Raspberry Pi can convert the analog voltage into a number using an ADC, which is the reverse of your DAC.
Is having the sensor output voltage read by the microprocessor good enough?

Not really: you need to know which voltage corresponds to which distance. Say it gives a reading of 1.75 V when you check how far the robot is from the wall. What does 1.75 V mean? You can't program the robot to move with that number; you need the actual distance, like 4.5 meters for example. What do we need to do?

Obviously, we have to calibrate the sensor. Below is our data: it plots the sensor output vs. the distance we set.





What we can do is eyeball the data and sketch a straight line through them, like the green line above. It says:
                            vout = 0.196 + 0.858 * x
where x is the distance.
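
For instance, once you have such a line, the robot can simply invert it to turn a voltage reading into a distance. Below is a minimal sketch in Python (the coefficients are the eyeballed ones above, the 1.75 V reading is the earlier example, and the result is in whatever distance units the calibration used):

    # Invert the eyeballed calibration line  vout = 0.196 + 0.858 * x
    # to recover the distance x from a voltage reading.
    a, b = 0.196, 0.858            # intercept and slope of the sketched green line

    def distance_from_voltage(vout):
        """Distance implied by a sensor voltage, in the calibration's distance units."""
        return (vout - a) / b

    print(distance_from_voltage(1.75))   # the 1.75 V example reading -> about 1.8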

But eyeballing is not the most accurate approach, hence we do linear regression, and get the result below:




                                    vout = 0.214 + 0.793 * x
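
(If you want to reproduce this kind of fit outside the Mathematica app, a least-squares line takes only a few lines of Python; the data arrays below are placeholders, not the actual calibration data.)

    import numpy as np

    # Placeholder calibration data: set distances x and measured sensor voltages vout.
    x    = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
    vout = np.array([0.61, 1.05, 1.37, 1.82, 2.21, 2.55])

    # Least-squares fit of a straight line: vout = intercept + slope * x
    slope, intercept = np.polyfit(x, vout, 1)
    print(f"vout = {intercept:.3f} + {slope:.3f} * x")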

This regression result is numerically more rigorous than our previous estimate. But are we happy with it? Something doesn't seem right: the data points are scattered too wildly. We test another sensor and get the result below:



We get:           
          vout = 0.234 + 0.817 * x


The difference between the two is:
        - for the intercept:  (0.214 - 0.234) / (0.5 * (0.214 + 0.234)) = -9 %
        - for the slope:      (0.793 - 0.817) / (0.5 * (0.793 + 0.817)) = -3 %


The slope is the more important of the two. Suppose the specification says we only need to measure the distance within +-5% uncertainty; the -3% difference between the slopes appears more than good enough, so we conclude that the two sensors agree with each other within the 5% tolerance and either one is fine.

Really? Not really!

Something tells us that the second sensor is perhaps more precise than the first one: the deviation of the data from the best-fit line appears smaller. How do we determine this? By doing statistical analysis (you can get plots like those below from the app).



 

What the little "golden hill" above (aka the joint normal distribution of the slope and intercept estimates) tells us is how much confidence we have in the calibration of each sensor. The narrower the hill, the higher the confidence. Hence the calibration of the second sensor (left-hand side) obviously has a much higher confidence level than that of the first sensor (right-hand side). Look at the little table above each graph. The "standard-error" columns say:

2nd sensor: slope = 0.817 +- 0.037
1st sensor: slope = 0.793 +- 0.092

This means the standard error of the 2nd sensor, 0.037, is about 2.5 times smaller than that of the 1st sensor, 0.092, so the 2nd sensor's calibration is "more credible". Now suppose the vendor offers you a 3rd type of sensor with the performance below. Which one of the three would you choose for your robot?

  

Hands down, you would choose the 3rd one.
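
(In case you want to check such standard errors outside the app, SciPy's linregress reports them directly; here is a minimal sketch with placeholder data, not the actual sensor measurements. The intercept_stderr attribute is available in recent SciPy versions.)

    import numpy as np
    from scipy import stats

    # Placeholder calibration data for one sensor (replace with your own measurements).
    x    = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
    vout = np.array([0.61, 1.05, 1.37, 1.82, 2.21, 2.55])

    fit = stats.linregress(x, vout)
    print(f"slope     = {fit.slope:.3f} +- {fit.stderr:.3f}")              # slope and its standard error
    print(f"intercept = {fit.intercept:.3f} +- {fit.intercept_stderr:.3f}")

    # Relative uncertainty of the slope -- the number to compare between sensors.
    print(f"relative slope error = {fit.stderr / fit.slope:.1%}")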

The same goes for the DAC in your project. Here is one scenario you might discover: the sensors turn out not to be so bad, but the ADC (the analog-to-digital converter used by the microprocessor) isn't precise after all and is itself a cause of the data error. Would you keep it or replace it? There is no point in buying a great sensor and using it with a lousy ADC, is there?

Which of us wants to ditch a retina-resolution display of 2560 x 1600 pixels and 32-bit (2^32 = 4.3 billion) colors to go back to the good old days of this? (8-bit Atari and 256 colors)

[Image: screenshot of an 8-bit Atari game]

How about listening to 8-bit instead of 24-bit iTunes music?

Probably not. But what is the point of having these great high-resolution technologies if the DAC behind them has poor precision? It is the same as buying a great, high-precision sensor and reading it with a low-precision ADC. Hence it is necessary and essential that you test how good your DAC is.


So, your DAC project must demonstrate how well the DAC works: how accurately and precisely it converts a digital number into a voltage. And as with testing the sensors above, the only way to tell is to take accurate measurements and do statistical analysis.
The resolution (smallest step) of the 8-bit DAC is 1/256 = 0.0039 of full scale, which means we must measure with at least 4 digits. We have the high-precision, high-accuracy DMM in the lab to do the measurements.
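
To make the resolution arithmetic concrete, here is a minimal sketch, assuming the usual ideal transfer function Vout = (code/256) * Vref and a 5 V reference (your circuit's reference may differ). Testing the DAC then amounts to regressing the DMM readings against the digital codes, exactly like the sensor calibration above.

    import numpy as np

    VREF   = 5.0                       # assumed full-scale reference in volts; use your circuit's value
    N_BITS = 8
    LSB    = VREF / 2**N_BITS          # smallest step: 5/256 ~ 0.0195 V, hence the 4-digit requirement

    codes   = np.arange(2**N_BITS)     # digital inputs 0..255
    v_ideal = codes * LSB              # ideal DAC transfer function

    # v_measured would come from the lab DMM; fit it against the codes, just as with
    # the sensor calibration, to get the DAC's gain, offset, and their standard errors:
    #   slope, offset = np.polyfit(codes, v_measured, 1)
    print(f"1 LSB = {LSB * 1000:.1f} mV")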
 

This is the app for doing regression on your data; it was demonstrated in the lecture.





 
